
Conversation

@addsubmuldiv
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information that belongs to this PR.

Experiment results

Paste your experiment results here (if needed).

Copilot AI review requested due to automatic review settings January 8, 2026 03:01
@gemini-code-assist
Contributor

Summary of Changes

Hello @addsubmuldiv, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the NPU (Ascend) training examples by adding new multi-node configurations for Megatron SFT. It also refactors the directory structure for existing Qwen3 LoRA training scripts, consolidating them under a dedicated qwen3 folder, and updates one of these scripts to utilize a newer Qwen model. These changes aim to enhance the clarity and breadth of NPU-specific training demonstrations.

Highlights

  • New Multi-Node NPU Examples: Introduced two new shell scripts (node1.sh, node2.sh) to demonstrate multi-node Megatron SFT training specifically for NPU (Ascend) environments, configured for a 2-node, 8-card per node setup.
  • Example Script Reorganization: Several existing Qwen3 LoRA training scripts (Deepspeed, FSDP, Megatron) have been moved into a new examples/ascend/train/qwen3/ subdirectory for better organization.
  • Model Update in Megatron Example: The qwen3_lora_megatron.sh script was updated to use the Qwen3-8B model instead of Qwen2.5-7B-Instruct.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist
Contributor

Summary of Changes

Hello @addsubmuldiv, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Ascend NPU training examples by adding multi-node support for Megatron, enabling distributed fine-tuning of large language models like Qwen3-8B. Concurrently, it improves the overall organization of existing Qwen3 training scripts and updates one of them to leverage the latest Qwen3-8B model, streamlining the development and deployment of advanced AI models on Ascend hardware.

Highlights

  • New Multi-Node Megatron Examples: Introduced two new shell scripts (node1.sh and node2.sh) to demonstrate multi-node Megatron training on Ascend NPUs. These scripts are configured for a 2-node setup, each with 8 cards, for fine-tuning the Qwen/Qwen3-8B model using LoRA.
  • Example Script Reorganization: Existing Qwen3 LoRA training examples (Deepspeed, FSDP, and Megatron) have been moved and reorganized into a new, dedicated directory structure under examples/ascend/train/qwen3/ for better clarity and management.
  • Model Update in Megatron Example: The qwen3_lora_megatron.sh script has been updated to utilize the Qwen/Qwen3-8B model instead of the older Qwen/Qwen2.5-7B-Instruct, along with corresponding adjustments to the save path.


Contributor

gemini-code-assist bot left a comment


Code Review

This pull request introduces a multi-node training example for Megatron on Ascend NPUs, along with some file restructuring. The example scripts are a good starting point, but have some issues. I've pointed out a critical error in node1.sh where MASTER_ADDR is incorrectly set for a multi-node scenario. I've also suggested improvements for placeholder values to make the scripts more user-friendly. Furthermore, I've highlighted significant code duplication between node1.sh and node2.sh and recommended refactoring them into a single, parameterized script for better maintainability. The other changes in the PR are fine.

ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
Contributor


critical

MASTER_ADDR is set to 127.0.0.1, which is incorrect for a multi-node setup as it refers to the local machine. For this example to work across multiple nodes, this should be the IP address of the master node, which must be reachable from all other nodes. Please use a placeholder like in node2.sh.

Suggested change:
- MASTER_ADDR=127.0.0.1 \
+ MASTER_ADDR=xxx.xxx.xxx.xxx \

MASTER_ADDR=127.0.0.1 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
Contributor


medium

The value xxx for HCCL_SOCKET_IFNAME is a placeholder. It would be helpful to add a comment above this line explaining that this needs to be replaced with the actual network interface name used for communication between nodes (e.g., eth0).
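
For example, the comment could look like the following sketch (eth0 is only an illustrative interface name; the real interface on each host can be listed with ip addr):

# Network interface used for HCCL communication between the nodes.
# Replace 'xxx' with the actual interface name on this host (e.g., eth0).
HCCL_SOCKET_IFNAME=xxx \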

Comment on lines +3 to +33
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
megatron sft \
--model 'Qwen/Qwen3-8B' \
--dataset 'AI-ModelScope/alpaca-gpt4-data-zh#1000' \
--save './SAVE' \
--train_type 'lora' \
--lora_rank 8 \
--lora_alpha 32 \
--target_modules 'all-linear' \
--tensor_model_parallel_size 2 \
--pipeline_model_parallel_size 1 \
--context_parallel_size 1 \
--sequence_parallel true \
--micro_batch_size 1 \
--global_batch_size 64 \
--recompute_granularity selective \
--recompute_modules core_attn \
--cross_entropy_loss_fusion true \
--no_gradient_accumulation_fusion true \
--lr 1e-4 \
--lr_warmup_fraction 0.05 \
--min_lr 1e-5 \
--max_epochs 1 \
--log_interval 5 \
--num_workers 4
Contributor


medium

The scripts node1.sh and node2.sh are nearly identical, which introduces code duplication and can make maintenance difficult. Consider merging them into a single script that accepts NODE_RANK and MASTER_ADDR as command-line arguments. This would make the example cleaner, more robust, and easier for users to adapt. For example, a single run.sh could be used as bash run.sh <NODE_RANK> <MASTER_ADDR>.
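
For illustration, a minimal sketch of such a combined launcher is shown below. The run.sh name, the positional-argument interface, and the truncated flag list are assumptions for illustration only; the training flags themselves would stay identical to node1.sh and node2.sh.

#!/bin/bash
# Sketch only. Usage: bash run.sh <NODE_RANK> <MASTER_ADDR>
if [ "$#" -ne 2 ]; then
  echo "usage: bash run.sh <NODE_RANK> <MASTER_ADDR>" >&2
  exit 1
fi

ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
NNODES=2 \
NODE_RANK=$1 \
MASTER_ADDR=$2 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
megatron sft \
    --model 'Qwen/Qwen3-8B' \
    --dataset 'AI-ModelScope/alpaca-gpt4-data-zh#1000' \
    --save './SAVE' \
    --train_type 'lora'
    # ...the remaining megatron sft flags are identical to node1.sh and node2.sh.

The master node would then run bash run.sh 0 <master_ip> and the second node bash run.sh 1 <master_ip>, where <master_ip> must be reachable from both machines.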

MASTER_ADDR=xxx.xxx.xxx.xxx \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \
Contributor


medium

The value xxx for HCCL_SOCKET_IFNAME is a placeholder. It would be helpful to add a comment above this line explaining that this needs to be replaced with the actual network interface name used for communication between nodes (e.g., eth0).


Copilot AI left a comment


Pull request overview

This PR adds comprehensive NPU (Ascend) training examples for Qwen3 models, including a new multi-node Megatron setup and additional training configurations.

  • Updates existing Qwen3 Megatron example from Qwen2.5-7B to Qwen3-8B
  • Adds new FSDP training configuration for Qwen3-32B
  • Adds new DeepSpeed Zero3 training configuration for Qwen3-32B
  • Introduces multi-node Megatron training examples for 2-node setups with 8 cards per node
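
As a usage sketch, and assuming the placeholder values discussed in the review comments have been replaced, the two scripts would be launched one per machine (this launch sequence is inferred from the file descriptions below, not stated explicitly in the PR):

# On the master node (NODE_RANK=0):
bash examples/ascend/multi-node/megatron/node1.sh
# On the worker node (NODE_RANK=1):
bash examples/ascend/multi-node/megatron/node2.sh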

Reviewed changes

Copilot reviewed 3 out of 6 changed files in this pull request and generated 2 comments.

Summary per file:

  • examples/ascend/train/qwen3/qwen3_lora_megatron.sh: Updates the model reference from Qwen2.5-7B-Instruct to Qwen3-8B and the corresponding output path.
  • examples/ascend/train/qwen3/qwen3_lora_fsdp/train.sh: Adds a new FSDP training script for Qwen3-32B with 8-device parallelism.
  • examples/ascend/train/qwen3/qwen3_lora_fsdp/fsdp.json: Adds an FSDP configuration with Qwen3DecoderLayer wrapping and a full sharding strategy.
  • examples/ascend/train/qwen3/qwen3_lora_deepspeed.sh: Adds a new DeepSpeed Zero3 training script for Qwen3-32B.
  • examples/ascend/multi-node/megatron/node1.sh: Adds the multi-node master node script with tensor parallelism and sequence parallelism.
  • examples/ascend/multi-node/megatron/node2.sh: Adds the multi-node worker node script with a configuration matching node1.

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +6 to +9
MASTER_ADDR=xxx.xxx.xxx.xxx \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \
HCCL_SOCKET_IFNAME=xxx \

Copilot AI Jan 8, 2026


The placeholder values 'xxx.xxx.xxx.xxx' for MASTER_ADDR and 'xxx' for HCCL_SOCKET_IFNAME need to be replaced with actual values. Consider adding a comment explaining that users must replace these placeholders with their actual master node IP address and network interface name (e.g., eth0, ens33).

Copilot uses AI. Check for mistakes.
NODE_RANK=0 \
MASTER_ADDR=127.0.0.1 \
MASTER_PORT=29500 \
NPROC_PER_NODE=8 \

Copilot AI Jan 8, 2026


The placeholder value 'xxx' for HCCL_SOCKET_IFNAME needs to be replaced with the actual network interface name. Consider adding a comment explaining that users must replace this placeholder with their actual network interface name (e.g., eth0, ens33).

Suggested change:
- NPROC_PER_NODE=8 \
+ NPROC_PER_NODE=8 \
+ # Replace 'xxx' with your actual network interface name (e.g., eth0, ens33).

Copilot uses AI. Check for mistakes.
@addsubmuldiv merged commit 5a6eeda into modelscope:main on Jan 8, 2026
8 of 9 checks passed